Quantization error-based regularization for hardware-aware neural network training

Authors
Abstract

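A minimal sketch of the idea named in the title, assuming the regularizer penalizes the distance between full-precision weights and their uniformly quantized counterparts during training; the quantizer, the penalty form, and the hyper-parameter lambda_q below are illustrative assumptions, not the paper's published formulation.

# Hypothetical sketch of a quantization-error penalty added to the training loss.
import torch
import torch.nn as nn

def uniform_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Uniform symmetric quantization of a weight tensor (illustrative only)."""
    scale = w.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    return torch.round(w / scale) * scale

def quantization_penalty(model: nn.Module, bits: int = 8) -> torch.Tensor:
    """Sum of squared distances between weights and their quantized values."""
    penalty = torch.zeros(())
    for p in model.parameters():
        penalty = penalty + ((p - uniform_quantize(p.detach(), bits)) ** 2).sum()
    return penalty

# Usage inside a training step (lambda_q is a hypothetical hyper-parameter):
model = nn.Linear(16, 4)
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
criterion = nn.CrossEntropyLoss()
lambda_q = 1e-3
loss = criterion(model(x), y) + lambda_q * quantization_penalty(model)
loss.backward()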

Similar articles

A conjugate gradient based method for Decision Neural Network training

Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By exploiting imprecise evaluation data, network training is improved and the number of required training data sets is reduced. The existing training method is based on the gradient descent method (backpropagation, BP), one of whose limitations is its slow convergence speed. Therefore,...
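As a rough illustration of conjugate-gradient-based training, the sketch below fits a tiny one-hidden-layer network with SciPy's generic nonlinear CG routine; the Decision Neural Network model and the paper's specific method are not reproduced, and the data here are synthetic.

# Minimal sketch: train a small network with a conjugate gradient optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # synthetic inputs
y = np.sin(X.sum(axis=1, keepdims=True))      # synthetic targets
n_in, n_hid, n_out = 3, 8, 1

def unpack(theta):
    """Split the flat parameter vector into the two weight matrices."""
    W1 = theta[: n_in * n_hid].reshape(n_in, n_hid)
    W2 = theta[n_in * n_hid:].reshape(n_hid, n_out)
    return W1, W2

def mse_loss(theta):
    W1, W2 = unpack(theta)
    hidden = np.tanh(X @ W1)
    return np.mean((hidden @ W2 - y) ** 2)

theta0 = rng.normal(scale=0.1, size=n_in * n_hid + n_hid * n_out)
result = minimize(mse_loss, theta0, method="CG")   # gradient approximated numerically
print("final training MSE:", result.fun)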


SqueezeNext: Hardware-Aware Neural Network Design

One of the main barriers to deploying neural networks on embedded systems has been the large memory footprint and power consumption of existing networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided both by previous architectures such as SqueezeNet and by simulation results on a neural network accelerator. This new network...
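A rough PyTorch sketch of the kind of block such hardware-aware designs use: a two-stage 1x1 bottleneck followed by separable 3x1 / 1x3 convolutions, a 1x1 expansion, and a skip connection. Channel counts and layer ordering here are illustrative and not taken from the published SqueezeNext configurations.

# Illustrative SqueezeNext-style residual block (not the published architecture).
import torch
import torch.nn as nn

class SqueezeNextLikeBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        mid = channels // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1), nn.ReLU(inplace=True),   # squeeze 1
            nn.Conv2d(mid, mid // 2, kernel_size=1), nn.ReLU(inplace=True),   # squeeze 2
            nn.Conv2d(mid // 2, mid, kernel_size=(3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=(1, 3), padding=(0, 1)), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),                           # expand
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # residual connection

block = SqueezeNextLikeBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])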


Error Modeling in Distribution Network State Estimation Using RBF-Based Artificial Neural Network

State estimation is essential for obtaining observable network models for online monitoring and analysis of power systems. Due to the integration of distributed energy resources and new technologies, state estimation in distribution systems is becoming necessary. However, accurate input data are essential for an accurate estimation, along with knowledge of the possible correlation between the real and...
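A generic sketch of an RBF network used for regression (Gaussian basis functions with a least-squares readout); the paper's estimator, features, and measurement data are not reproduced, and everything below is synthetic.

# Generic RBF-network regression sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # stand-in measurement features
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=200)

n_centers, width = 20, 1.0
centers = X[rng.choice(len(X), n_centers, replace=False)]   # random centers (k-means is also common)

def rbf_features(X, centers, width):
    """Gaussian activations exp(-||x - c||^2 / (2 * width^2)) for every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

Phi = rbf_features(X, centers, width)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear readout by least squares
print("training RMSE:", np.sqrt(np.mean((Phi @ weights - y) ** 2)))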


OCReP: An Optimally Conditioned Regularization for pseudoinversion based neural training

In this paper we consider the training of single hidden layer neural networks by pseudoinversion, which, in spite of its popularity, is sometimes affected by numerical instability issues. Regularization is known to be effective in such cases, so that we introduce, in the framework of Tikhonov regularization, a matricial reformulation of the problem which allows us to use the condition number as...
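A minimal sketch of single-hidden-layer training by regularized pseudoinversion in the Tikhonov framework; the regularization parameter lam is fixed by hand here, whereas OCReP's contribution is to derive it from the condition number, which this sketch does not reproduce.

# Tikhonov-regularized pseudoinversion for the output weights (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
T = np.sin(X @ rng.normal(size=(4, 1)))             # synthetic targets

n_hidden, lam = 50, 1e-2
W_in = rng.normal(size=(4, n_hidden))               # random, fixed input weights
H = np.tanh(X @ W_in)                               # hidden-layer activations

# Ridge solution: beta = (H^T H + lam * I)^(-1) H^T T
A = H.T @ H + lam * np.eye(n_hidden)
beta = np.linalg.solve(A, H.T @ T)
print("condition number of regularized system:", np.linalg.cond(A))
print("training MSE:", np.mean((H @ beta - T) ** 2))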


Finite Precision Error Analysis of Neural Network Hardware Implementations

For network learning, at least 14-16 bits of precision must be used for the weights to avoid having the training process diverge too much from the trajectory of the high-precision computation...
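An illustrative numpy sketch of the effect the excerpt describes: rounding weights to a signed fixed-point grid of a given word length and measuring how far they move from their full-precision values. The weight range and bit widths below are arbitrary examples, not the paper's experimental setup.

# How weight quantization error shrinks as the word length grows.
import numpy as np

def fixed_point(w: np.ndarray, bits: int, w_max: float = 1.0) -> np.ndarray:
    """Round weights to a signed fixed-point grid with the given word length."""
    step = w_max / (2 ** (bits - 1))
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=10000)

for bits in (8, 12, 14, 16):
    err = np.sqrt(np.mean((fixed_point(w, bits) - w) ** 2))
    print(f"{bits:2d}-bit weights: RMS quantization error = {err:.2e}")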



Journal

Journal title: Nonlinear Theory and Its Applications, IEICE

Year: 2018

ISSN: 2185-4106

DOI: 10.1587/nolta.9.453